Results 1 - 20 of 42
1.
PLoS One ; 19(3): e0300333, 2024.
Article in English | MEDLINE | ID: mdl-38478503

ABSTRACT

Many journals in ecology and evolutionary biology encourage or require authors to make their data and code available alongside articles. In this study we investigated how often these data and code could be used together, when both were available, to computationally reproduce results published in articles. We surveyed the data and code sharing practices of 177 meta-analyses published in ecology and evolutionary biology journals between 2015 and 2017: 60% of articles shared data only, 1% shared code only, and 15% shared both data and code. In each of the articles that had shared both (n = 26), we selected a target result and attempted to reproduce it. Using the shared data and code files, we successfully reproduced the targeted results in 27-73% of the 26 articles, depending on the stringency of the criteria applied for a successful reproduction. The results from this sample of meta-analyses in the 2015-17 literature can provide a benchmark for future meta-research studies gauging the computational reproducibility of published research in ecology and evolutionary biology.


Subjects
Ecology, Publications, Reproducibility of Results, Biological Evolution
2.
Account Res ; : 1-28, 2024 Feb 01.
Article in English | MEDLINE | ID: mdl-38299475

ABSTRACT

BACKGROUND: Despite wide recognition of the benefits of sharing research data, public availability rates have not increased substantially in oncology or medicine more broadly over the last decade. METHODS: We surveyed 285 cancer researchers to determine their prior experience with sharing data and views on known drivers and inhibitors. RESULTS: We found that 45% of respondents had shared some data from their most recent empirical publication, with respondents who typically studied non-human research participants, or routinely worked with human genomic data, more likely to share than those who did not. A third of respondents added that they had previously shared data privately, with 74% indicating that doing so had also led to authorship opportunities or future collaborations for them. Journal and funder policies were reported to be the biggest general drivers toward sharing, whereas commercial interests, agreements with industrial sponsors and institutional policies were the biggest inhibitors. We show that researchers' decisions about whether to share data are also likely to be influenced by participants' desires. CONCLUSIONS: Our survey suggests that increased promotion and support by research institutions, alongside greater championing of data sharing by journals and funders, may motivate more researchers in oncology to share their data.

4.
BMJ ; 382: e075767, 2023 07 11.
Article in English | MEDLINE | ID: mdl-37433624

ABSTRACT

OBJECTIVES: To synthesise research investigating data and code sharing in medicine and health to establish an accurate representation of the prevalence of sharing, how this frequency has changed over time, and what factors influence availability. DESIGN: Systematic review with meta-analysis of individual participant data. DATA SOURCES: Ovid Medline, Ovid Embase, and the preprint servers medRxiv, bioRxiv, and MetaArXiv were searched from inception to 1 July 2021. Forward citation searches were also performed on 30 August 2022. REVIEW METHODS: Meta-research studies that investigated data or code sharing across a sample of scientific articles presenting original medical and health research were identified. Two authors screened records, assessed the risk of bias, and extracted summary data from study reports when individual participant data could not be retrieved. Key outcomes of interest were the prevalence of statements that declared that data or code were publicly or privately available (declared availability) and the success rates of retrieving these products (actual availability). The associations between data and code availability and several factors (eg, journal policy, type of data, trial design, and human participants) were also examined. A two-stage approach to meta-analysis of individual participant data was performed, with proportions and risk ratios pooled with the Hartung-Knapp-Sidik-Jonkman method for random effects meta-analysis. RESULTS: The review included 105 meta-research studies examining 2 121 580 articles across 31 specialties. Eligible studies examined a median of 195 primary articles (interquartile range 113-475), with a median publication year of 2015 (interquartile range 2012-2018). Only eight studies (8%) were classified as having a low risk of bias. Meta-analyses showed a prevalence of declared and actual public data availability of 8% (95% confidence interval 5% to 11%) and 2% (1% to 3%), respectively, between 2016 and 2021.
For public code sharing, both the prevalence of declared and actual availability were estimated to be <0.5% since 2016. Meta-regressions indicated that only declared public data sharing prevalence estimates have increased over time. Compliance with mandatory data sharing policies ranged from 0% to 100% across journals and varied by type of data. In contrast, success in privately obtaining data and code from authors historically ranged from 0% to 37% and from 0% to 23%, respectively. CONCLUSIONS: The review found that public code sharing was persistently low across medical research. Declarations of data sharing were also low and, although increasing over time, did not always correspond to actual sharing of data. The effectiveness of mandatory data sharing policies varied substantially by journal and type of data, a finding that might be informative for policy makers when designing policies and allocating resources to audit compliance. SYSTEMATIC REVIEW REGISTRATION: Open Science Framework doi:10.17605/OSF.IO/7SX8U.
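The random-effects pooling described in this abstract, using the Hartung-Knapp-Sidik-Jonkman (HKSJ) variance correction, can be sketched as follows. This is a minimal illustration of the general method (DerSimonian-Laird between-study variance plus the HKSJ adjustment), not the review's actual analysis code; the function name and inputs are hypothetical.

```python
import math

def hksj_random_effects(estimates, variances):
    """Pool per-study estimates with DerSimonian-Laird tau^2 and the
    Hartung-Knapp-Sidik-Jonkman (HKSJ) standard error correction."""
    k = len(estimates)
    # Fixed-effect weights, pooled mean, Cochran's Q
    w_fixed = [1.0 / v for v in variances]
    sw = sum(w_fixed)
    fixed = sum(w * y for w, y in zip(w_fixed, estimates)) / sw
    q = sum(w * (y - fixed) ** 2 for w, y in zip(w_fixed, estimates))
    # DerSimonian-Laird estimate of the between-study variance
    c = sw - sum(w * w for w in w_fixed) / sw
    tau2 = max(0.0, (q - (k - 1)) / c)
    # Random-effects weights and pooled estimate
    w = [1.0 / (v + tau2) for v in variances]
    pooled = sum(wi * y for wi, y in zip(w, estimates)) / sum(w)
    # HKSJ variance: weighted squared deviations scaled by k - 1
    var_hksj = sum(wi * (y - pooled) ** 2
                   for wi, y in zip(w, estimates)) / ((k - 1) * sum(w))
    return pooled, math.sqrt(var_hksj)
```

The returned standard error is paired with a t-distribution quantile on k - 1 degrees of freedom (rather than a normal quantile) to form the confidence interval, which is the defining feature of the HKSJ approach.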


Subjects
Biomedical Research, Medicine, Humans, Prevalence, Administrative Personnel, Information Dissemination
5.
R Soc Open Sci ; 10(6): 221553, 2023 Jun.
Article in English | MEDLINE | ID: mdl-37293358

ABSTRACT

This paper explores judgements about the replicability of social and behavioural sciences research and what drives those judgements. Using a mixed methods approach, it draws on qualitative and quantitative data elicited from groups using a structured approach called the IDEA protocol ('investigate', 'discuss', 'estimate' and 'aggregate'). Five groups of five people with relevant domain expertise evaluated 25 research claims that were subject to at least one replication study. Participants assessed the probability that each of the 25 research claims would replicate (i.e. that a replication study would find a statistically significant result in the same direction as the original study) and described the reasoning behind those judgements. We quantitatively analysed possible correlates of predictive accuracy, including self-rated expertise and updating of judgements after feedback and discussion. We qualitatively analysed the reasoning data to explore the cues, heuristics and patterns of reasoning used by participants. Participants achieved 84% classification accuracy in predicting replicability. Those who engaged in a greater breadth of reasoning provided more accurate replicability judgements. Some reasons were more commonly invoked by more accurate participants, such as 'effect size' and 'reputation' (e.g. of the field of research). There was also some evidence of a relationship between statistical literacy and accuracy.

6.
PLoS One ; 18(1): e0274429, 2023.
Article in English | MEDLINE | ID: mdl-36701303

ABSTRACT

As replications of individual studies are resource intensive, techniques for predicting replicability are required. We introduce the repliCATS (Collaborative Assessments for Trustworthy Science) process, a new method for eliciting expert predictions about the replicability of research. This process is a structured expert elicitation approach based on a modified Delphi technique applied to the evaluation of research claims in social and behavioural sciences. The utility of processes to predict replicability is their capacity to test scientific claims without the costs of full replication. Experimental data support the validity of this process, with a validation study producing a classification accuracy of 84% and an area under the ROC curve (AUC) of 0.94, meeting or exceeding the accuracy of other techniques used to predict replicability. The repliCATS process provides other benefits. It is highly scalable, able to be deployed both for rapid assessment of small numbers of claims and for assessment of high volumes of claims over an extended period through an online elicitation platform, having been used to assess 3000 research claims over an 18-month period. It is available to be implemented in a range of ways and we describe one such implementation. An important advantage of the repliCATS process is that it collects qualitative data that has the potential to provide insight into the limits of generalizability of scientific claims. The primary limitation of the repliCATS process is its reliance on human-derived predictions, with consequent costs in terms of participant fatigue, although careful design can minimise these costs. The repliCATS process has potential applications in alternative peer review and in the allocation of effort for replication studies.
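The two headline validation metrics in this abstract, classification accuracy and AUC, can be computed directly from probabilistic replicability predictions. The sketch below is generic (the function names and data are illustrative, not the repliCATS platform's code); the AUC uses the rank-based Mann-Whitney identity, i.e. the probability that a randomly chosen replicating claim receives a higher predicted probability than a randomly chosen non-replicating one.

```python
def classification_accuracy(probs, outcomes, threshold=0.5):
    """Fraction of claims whose predicted replication probability falls
    on the correct side of the threshold."""
    correct = sum((p >= threshold) == bool(y) for p, y in zip(probs, outcomes))
    return correct / len(probs)

def auc(probs, outcomes):
    """Area under the ROC curve via the Mann-Whitney identity:
    count pairwise wins of replicating over non-replicating claims,
    with ties counted as half a win."""
    pos = [p for p, y in zip(probs, outcomes) if y]
    neg = [p for p, y in zip(probs, outcomes) if not y]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```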


Assuntos
Ciências do Comportamento , Confiabilidade dos Dados , Humanos , Reprodutibilidade dos Testes , Custos e Análise de Custo , Revisão por Pares
7.
BMC Med ; 20(1): 438, 2022 11 09.
Article in English | MEDLINE | ID: mdl-36352426

ABSTRACT

BACKGROUND: Various stakeholders are calling for increased availability of data and code from cancer research. However, it is unclear how commonly these products are shared, and what factors are associated with sharing. Our objective was to evaluate how frequently oncology researchers make data and code available and explore factors associated with sharing. METHODS: A cross-sectional analysis of a random sample of 306 cancer-related articles indexed in PubMed in 2019 which studied research subjects with a cancer diagnosis was performed. All articles were independently screened for eligibility by two authors. Outcomes of interest included the prevalence of affirmative sharing declarations and the rate at which declarations corresponded to data complying with key FAIR principles (e.g. posted to a recognised repository, assigned an identifier, data license outlined, non-proprietary formatting). We also investigated associations between sharing rates and several journal characteristics (e.g. sharing policies, publication models), study characteristics (e.g. cancer rarity, study design), open science practices (e.g. pre-registration, pre-printing) and subsequent citation rates between 2020 and 2021. RESULTS: One in five studies declared data were publicly available (59/306, 19%, 95% CI: 15-24%). However, when data availability was investigated this percentage dropped to 16% (49/306, 95% CI: 12-20%), and then to less than 1% (1/306, 95% CI: 0-2%) when data were checked for compliance with key FAIR principles. While only 4% of articles that used inferential statistics reported code to be available (10/274, 95% CI: 2-6%), the odds of reporting code to be available were 5.6 times higher for researchers who shared data. Compliance with mandatory data and code sharing policies was observed in 48% (14/29) and 0% (0/6) of articles, respectively. However, 88% of articles (45/51) included data availability statements when required.
Policies that encouraged data sharing did not appear to be any more effective than not having a policy at all. The only factors associated with higher rates of data sharing were studying rare cancers and using publicly available data to complement original research. CONCLUSIONS: Data and code sharing in oncology occurs infrequently, and at a lower rate than would be expected given the prevalence of mandatory sharing policies. There is also a large gap between those declaring data to be available, and those archiving data in a way that facilitates its reuse. We encourage journals to actively check compliance with sharing policies, and researchers to consult community-accepted guidelines when archiving the products of their research.
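An odds ratio like the 5.6 reported in this abstract comes from a 2x2 cross-classification of articles by data sharing and code sharing. A minimal sketch with purely illustrative counts (not the paper's data), using the standard Woolf log-based 95% confidence interval:

```python
import math

def odds_ratio(a, b, c, d):
    """Odds ratio and 95% CI for a 2x2 table of counts:
    a = shared data & shared code,  b = shared data & no code,
    c = no data & shared code,      d = no data & no code."""
    or_ = (a * d) / (b * c)
    # Woolf method: standard error of log(OR) from the four cell counts
    se_log = math.sqrt(1/a + 1/b + 1/c + 1/d)
    lo = math.exp(math.log(or_) - 1.96 * se_log)
    hi = math.exp(math.log(or_) + 1.96 * se_log)
    return or_, (lo, hi)
```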


Subjects
Information Dissemination, Neoplasms, Humans, Cross-Sectional Studies, Medical Oncology, Research Design, Neoplasms/diagnosis, Neoplasms/epidemiology
8.
BMC Res Notes ; 15(1): 127, 2022 Apr 05.
Article in English | MEDLINE | ID: mdl-35382867

ABSTRACT

Journal peer review regulates the flow of ideas through an academic discipline and thus has the power to shape what a research community knows, actively investigates, and recommends to policymakers and the wider public. We might assume that editors can identify the 'best' experts and rely on them for peer review. But decades of research on both expert decision-making and peer review suggest they cannot. In the absence of a clear criterion for demarcating reliable, insightful, and accurate expert assessors of research quality, the best safeguard against unwanted biases and uneven power distributions is to introduce greater transparency and structure into the process. This paper argues that peer review would therefore benefit from applying a series of evidence-based recommendations from the empirical literature on structured expert elicitation. We highlight individual and group characteristics that contribute to higher quality judgements, and elements of elicitation protocols that reduce bias, promote constructive discussion, and enable opinions to be objectively and transparently aggregated.


Subjects
Peer Review
9.
Annu Rev Psychol ; 73: 719-748, 2022 01 04.
Article in English | MEDLINE | ID: mdl-34665669

ABSTRACT

Replication-an important, uncommon, and misunderstood practice-is gaining appreciation in psychology. Achieving replicability is important for making research progress. If findings are not replicable, then prediction and theory development are stifled. If findings are replicable, then interrogation of their meaning and validity can advance knowledge. Assessing replicability can be productive for generating and testing hypotheses by actively confronting current understandings to identify weaknesses and spur innovation. For psychology, the 2010s might be characterized as a decade of active confrontation. Systematic and multi-site replication projects assessed current understandings and observed surprising failures to replicate many published findings. Replication efforts highlighted sociocultural challenges such as disincentives to conduct replications and a tendency to frame replication as a personal attack rather than a healthy scientific practice, and they raised awareness that replication contributes to self-correction. Nevertheless, innovation in doing and understanding replication and its cousins, reproducibility and robustness, has positioned psychology to improve research practices and accelerate progress.


Subjects
Research Design, Humans, Reproducibility of Results
10.
F1000Res ; 10: 491, 2021.
Article in English | MEDLINE | ID: mdl-34631024

ABSTRACT

Numerous studies have demonstrated low but increasing rates of data and code sharing within medical and health research disciplines. However, it remains unclear how commonly data and code are shared across all fields of medical and health research, as well as whether sharing rates are positively associated with implementation of progressive policies by publishers and funders, or growing expectations from the medical and health research community at large. Therefore, this systematic review aims to synthesise the findings of medical and health science studies that have empirically investigated the prevalence of data or code sharing, or both. Objectives include the investigation of: (i) the prevalence of public sharing of research data and code alongside published articles (including preprints), (ii) the prevalence of private sharing of research data and code in response to reasonable requests, and (iii) factors associated with the sharing of either research output (e.g., the year published, the publisher's policy on sharing, the presence of a data or code availability statement). It is hoped that the results will provide some insight into how often research data and code are shared publicly and privately, how this has changed over time, and how effective some measures such as the institution of data sharing policies and data availability statements have been in motivating researchers to share their underlying data and code.


Subjects
Information Dissemination, Publications, Data Analysis, Humans, Meta-Analysis as Topic, Research Personnel, Systematic Reviews as Topic
11.
Syst Rev ; 10(1): 112, 2021 04 16.
Article in English | MEDLINE | ID: mdl-33863381

ABSTRACT

BACKGROUND: Investigations of transparency, reproducibility and replicability in science have been directed largely at individual studies. It is just as critical to explore these issues in syntheses of studies, such as systematic reviews, given their influence on decision-making and future research. We aim to explore various aspects relating to the transparency, reproducibility and replicability of several components of systematic reviews with meta-analysis of the effects of health, social, behavioural and educational interventions. METHODS: The REPRISE (REProducibility and Replicability In Syntheses of Evidence) project consists of four studies. We will evaluate the completeness of reporting and sharing of review data, analytic code and other materials in a random sample of 300 systematic reviews of interventions published in 2020 (Study 1). We will survey authors of systematic reviews to explore their views on sharing review data, analytic code and other materials and their understanding of and opinions about replication of systematic reviews (Study 2). We will then evaluate the extent of variation in results when we (a) independently reproduce meta-analyses using the same computational steps and analytic code (if available) as used in the original review (Study 3), and (b) crowdsource teams of systematic reviewers to independently replicate a subset of methods (searches for studies, selection of studies for inclusion, collection of outcome data, and synthesis of results) in a sample of the original reviews; 30 reviews will be replicated by 1 team each and 2 reviews will be replicated by 15 teams (Study 4). DISCUSSION: The REPRISE project takes a systematic approach to determine how reliable systematic reviews of interventions are. We anticipate that results of the REPRISE project will inform strategies to improve the conduct and reporting of future systematic reviews.


Subjects
Research Design, Humans, Meta-Analysis as Topic, Reproducibility of Results, Systematic Reviews as Topic
12.
BMC Biol ; 19(1): 68, 2021 04 09.
Article in English | MEDLINE | ID: mdl-33836762

ABSTRACT

Unreliable research programmes waste funds, time, and even the lives of the organisms we seek to help and understand. Reducing this waste and increasing the value of scientific evidence require changing the actions of both individual researchers and the institutions they depend on for employment and promotion. While ecologists and evolutionary biologists have somewhat improved research transparency over the past decade (e.g. more data sharing), major obstacles remain. In this commentary, we lift our gaze to the horizon to imagine how researchers and institutions can clear the path towards more credible and effective research programmes.


Subjects
Biological Evolution, Ecosystem
13.
Elife ; 9, 2020 11 19.
Article in English | MEDLINE | ID: mdl-33211009

ABSTRACT

Peer review practices differ substantially between journals and disciplines. This study presents the results of a survey of 322 editors of journals in ecology, economics, medicine, physics and psychology. We found that 49% of the journals surveyed checked all manuscripts for plagiarism, that 61% allowed authors to recommend both for and against specific reviewers, and that less than 6% used a form of open peer review. Most journals did not have an official policy on altering reports from reviewers, but 91% of editors identified at least one situation in which it was appropriate for an editor to alter a report. Editors were also asked for their views on five issues related to publication ethics. A majority expressed support for co-reviewing, reviewers requesting access to data, reviewers recommending citations to their work, editors publishing in their own journals, and replication studies. Our results provide a window into what is largely an opaque aspect of the scientific process. We hope the findings will inform the debate about the role and transparency of peer review in scholarly publishing.


Subjects
Editorial Policies, Peer Review, Periodicals as Topic, Humans, Surveys and Questionnaires
14.
Ecol Evol ; 10(12): 5197-5207, 2020 Jun.
Article in English | MEDLINE | ID: mdl-32607143

ABSTRACT

Recent large-scale projects in other disciplines have shown that results often fail to replicate when studies are repeated. The conditions contributing to this problem are also present in ecology, but there have not been any equivalent replication projects. Here, we survey ecologists' understanding of and opinions about replication studies. The majority of ecologists in our sample considered replication studies to be important (97%), not prevalent enough (91%), worth funding even given limited resources (61%), and suitable for publication in all journals (62%). However, there is a disconnect between this enthusiasm and the prevalence of direct replication studies in the literature, which is much lower (0.023%; Kelly 2019) than our participants' median estimate of 10%. This may be explained by the obstacles our participants identified, including the difficulty of conducting replication studies and of funding and publishing them. We conclude by offering suggestions for how replications could be better integrated into ecological research.

15.
Conserv Biol ; 34(5): 1131-1141, 2020 10.
Article in English | MEDLINE | ID: mdl-32043648

ABSTRACT

Communication and advocacy approaches that influence attitudes and behaviors are key to addressing conservation problems, and the way an issue is framed can affect how people view, judge, and respond to an issue. Responses to conservation interventions can also be influenced by subtle wording changes in statements that may appeal to different values, activate social norms, influence a person's affect or mood, or trigger certain biases, each of which can differently influence the resulting engagement, attitudes, and behavior. We contend that by strategically considering how conservation communications are framed, they can be made more effective with little or no additional cost. Key framing considerations include emphasizing things that matter to the audience, evoking helpful social norms, reducing psychological distance, leveraging useful biases, and, where practicable, testing messages. These lessons will help communicators think strategically about how to frame messages for greater effect.


Spanish-language abstract: Five Lessons for More Effective Framing of Biodiversity Conservation Messages.


Subjects
Biodiversity, Conservation of Natural Resources, Attitude, Communication, Humans
16.
PLoS One ; 14(4): e0213522, 2019.
Article in English | MEDLINE | ID: mdl-30995242

ABSTRACT

People interpret verbal expressions of probabilities (e.g. 'very likely') in different ways, yet words are commonly preferred to numbers when communicating uncertainty. Simply providing numerical translations alongside reports or text containing verbal probabilities should encourage consistency, but these guidelines are often ignored. In an online experiment with 924 participants, we compared four different formats for presenting verbal probabilities with the numerical guidelines used in the US Intelligence Community Directive (ICD) 203 to see whether any could improve the correspondence between the intended meaning and participants' interpretation ('in-context'). This extends previous work in the domain of climate science. The four experimental conditions we tested were: 1. numerical guidelines bracketed in text, e.g. X is very unlikely (05-20%), 2. click to see the full guidelines table in a new window, 3. numerical guidelines appear in a mouse-over tooltip, and 4. no guidelines provided (control). Results indicate that correspondence with the ICD 203 standard is substantially improved only when numerical guidelines are bracketed in text. For this condition, average correspondence was 66%, compared with 32% in the control. We also elicited 'context-free' numerical judgements from participants for each of the seven verbal probability expressions contained in ICD 203 (i.e., we asked participants what range of numbers they, personally, would assign to those expressions), and constructed 'evidence-based lexicons' based on two methods from similar research, 'membership functions' and 'peak values', that reflect our large sample's intuitive translations of the terms. Better aligning the intended and assumed meaning of fuzzy words like 'unlikely' can reduce communication problems between the reporter and receiver of probabilistic information. In turn, this can improve decision making under uncertainty.
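The correspondence measure reported in this abstract is, in essence, the share of participants' numerical interpretations of a phrase that fall inside the ICD 203 range for that phrase. A minimal sketch (the function name and data are hypothetical, not the study's code):

```python
def correspondence(interpretations, lo, hi):
    """Share of numeric interpretations (as proportions) that fall
    inside the guideline range [lo, hi] for a verbal probability term."""
    inside = sum(lo <= x <= hi for x in interpretations)
    return inside / len(interpretations)

# Example: 'very unlikely' maps to 5-20% under ICD 203; two of three
# hypothetical interpretations land inside that range.
share = correspondence([0.10, 0.15, 0.30], 0.05, 0.20)
```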


Subjects
Confusion, Decision Making/physiology, Judgment/physiology, Uncertainty, Verbal Behavior/physiology, Adult, Female, Humans, Male
17.
PLoS One ; 13(7): e0200303, 2018.
Article in English | MEDLINE | ID: mdl-30011289

ABSTRACT

We surveyed 807 researchers (494 ecologists and 313 evolutionary biologists) about their use of Questionable Research Practices (QRPs), including cherry picking statistically significant results, p-hacking, and hypothesising after the results are known (HARKing). We also asked them to estimate the proportion of their colleagues that use each of these QRPs. Several of the QRPs were prevalent within the ecology and evolution research community. Across the two groups, we found 64% of surveyed researchers reported they had at least once failed to report results because they were not statistically significant (cherry picking); 42% had collected more data after inspecting whether results were statistically significant (a form of p-hacking) and 51% had reported an unexpected finding as though it had been hypothesised from the start (HARKing). Such practices have been directly implicated in the low rates of reproducible results uncovered by recent large-scale replication studies in psychology and other disciplines. The rates of QRPs found in this study are comparable with the rates seen in psychology, indicating that the reproducibility problems discovered in psychology are also likely to be present in ecology and evolution.


Subjects
Biological Evolution, Ecology, Research, Humans, Reproducibility of Results, Research Design, Research Personnel/statistics & numerical data, Scientific Misconduct/statistics & numerical data, Statistics as Topic, Surveys and Questionnaires
18.
PLoS One ; 13(6): e0198468, 2018.
Article in English | MEDLINE | ID: mdl-29933407

ABSTRACT

INTRODUCTION: Natural resource management uses expert judgement to estimate facts that inform important decisions. Unfortunately, expert judgement is often derived by informal and largely untested protocols, despite evidence that the quality of judgements can be improved with structured approaches. We attribute the lack of uptake of structured protocols to the dearth of illustrative examples that demonstrate how they can be applied within pressing time and resource constraints, while also improving judgements. AIMS AND METHODS: In this paper, we demonstrate how the IDEA protocol for structured expert elicitation may be deployed to overcome operational challenges while improving the quality of judgements. The protocol was applied to the estimation of 14 future abiotic and biotic events on the Great Barrier Reef, Australia. Seventy-six participants with varying levels of expertise related to the Great Barrier Reef were recruited and allocated randomly to eight groups. Each participant provided their judgements using the four-step question format of the IDEA protocol ('Investigate', 'Discuss', 'Estimate', 'Aggregate') through remote elicitation. When the events were realised, the participant judgements were scored in terms of accuracy, calibration and informativeness. RESULTS AND CONCLUSIONS: The results demonstrate that the IDEA protocol provides a practical, cost-effective, and repeatable approach to the elicitation of quantitative estimates and uncertainty via remote elicitation. We emphasise that i) the aggregation of diverse individual judgements into pooled group judgments almost always outperformed individuals, and ii) use of a modified Delphi approach helped to remove linguistic ambiguity, and further improved individual and group judgements. Importantly, the protocol encourages review, critical appraisal and replication, each of which is required if judgements are to be used in place of data in a scientific context. 
The results add to the growing body of literature that demonstrates the merit of using structured elicitation protocols. We urge decision-makers and analysts to use these insights and examples to improve the evidence base of expert judgement in natural resource management.


Subjects
Decision Making, Australia, Cost-Benefit Analysis, Female, Humans, Judgment, Male, Natural Resources, Random Allocation
19.
Nat Ecol Evol ; 2(6): 929-935, 2018 06.
Article in English | MEDLINE | ID: mdl-29789547

ABSTRACT

Peer review is widely considered fundamental to maintaining the rigour of science, but it often fails to ensure transparency and reduce bias in published papers, and this systematically weakens the quality of published inferences. In part, this is because many reviewers are unaware of important questions to ask with respect to the soundness of the design and analyses, and the presentation of the methods and results; also some reviewers may expect others to be responsible for these tasks. We therefore present a reviewers' checklist of ten questions that address these critical components. Checklists are commonly used by practitioners of other complex tasks, and we see great potential for the wider adoption of checklists for peer review, especially to reduce bias and facilitate transparency in published papers. We expect that such checklists will be well received by many reviewers.


Subjects
Checklist, Editorial Policies, Peer Review/standards, Periodicals as Topic, Ecology